ZORRO: Valid, Sparse, and Stable Explanations in Graph Neural Networks

Authors

Abstract

With the ever-increasing popularity and applications of graph neural networks, several proposals have been made to explain and understand the decisions of a graph neural network. Explanations for graph neural networks differ in principle from explanations in other input settings: it is important to attribute the decision not only to input features but also to related instances connected by the graph structure. We find that previous explanation generation approaches, which maximize the mutual information between the label distribution produced by the model and the explanation, are restrictive. Specifically, existing approaches do not enforce explanations to be valid, sparse, or robust to input perturbations. In this paper, we lay down some fundamental principles that an explanation method for graph neural networks should follow and introduce a metric, RDT-Fidelity, as a measure of an explanation's effectiveness. We propose a novel approach, Zorro, based on rate-distortion theory that uses a simple combinatorial procedure to optimize for RDT-Fidelity. Extensive experiments on real and synthetic datasets reveal that Zorro produces sparser, more stable, and more faithful explanations than existing graph neural network explanation approaches.
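The idea behind RDT-Fidelity can be illustrated with a minimal sketch: an explanation is a binary mask over input features, and its fidelity is the fraction of perturbed inputs on which the model's prediction is unchanged when masked-out features are replaced by noise. The function below is an illustrative estimator, not the paper's exact implementation; the model interface, the Gaussian noise distribution, and the sample count are assumptions for the example.

```python
import numpy as np

def rdt_fidelity(model_predict, x, mask, n_samples=100, seed=0):
    """Estimate the fidelity of a binary feature mask (illustrative sketch).

    model_predict: maps a feature vector to a vector of class scores.
    x:             the original feature vector.
    mask:          1 for features kept in the explanation, 0 otherwise.

    Masked-out features are resampled from an assumed noise distribution
    (standard Gaussian here); fidelity is the fraction of perturbed inputs
    whose predicted class matches the prediction on the original input.
    """
    rng = np.random.default_rng(seed)
    base_class = np.argmax(model_predict(x))
    agreements = 0
    for _ in range(n_samples):
        noise = rng.normal(size=x.shape)          # assumed perturbation distribution
        x_perturbed = np.where(mask == 1, x, noise)  # keep explained features, noise the rest
        agreements += int(np.argmax(model_predict(x_perturbed)) == base_class)
    return agreements / n_samples
```

If the model truly depends only on the masked-in features, the estimate approaches 1; a sparse mask with high fidelity is, in this sense, a faithful explanation.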


Similar Articles

Extracting Explanations from Neural Networks

The use of neural networks is still difficult in many application areas due to the lack of explanation facilities (the « black box » problem). An example of such applications is multiple criteria decision making (MCDM), applied to location problems having environmental impact. However, the concepts and methods presented are also applicable to other problem domains. These concepts show how to ex...


Sparse Neural Networks Topologies

We propose Sparse Neural Network architectures that are based on random or structured bipartite graph topologies. Sparse architectures provide compression of the learned models and speed-ups of computations; they can also surpass their unstructured or fully connected counterparts. As we show, even more compact topologies of the so-called SNN (Sparse Neural Network) can be achieved with the use ...


Sparse Rectifier Neural Networks

Rectifying neurons are more biologically plausible than sigmoid neurons, which are in turn more biologically plausible than hyperbolic tangent neurons (although the latter work better for training multi-layer neural networks than sigmoid neurons). We show that networks of rectifying neurons yield generally better performance than sigmoid or tanh networks while creating highly sparse representations with true zeros, ...


Neural Networks and Graph Transformations

The introduction of the artificial neuron by McCulloch and Pitts, who were inspired by the biological neuron, is considered the beginning of the area of artificial neural networks. Since then, many new networks and new algorithms for neural networks have been invented. In most textbooks on (artificial) neural networks there is no general definition of what a neural net is, but r...


Deep Sparse Rectifier Neural Networks

Rectifying neurons are more biologically plausible than logistic sigmoid neurons, which are themselves more biologically plausible than hyperbolic tangent neurons. However, the latter work better for training multi-layer neural networks than logistic sigmoid neurons. This paper shows that networks of rectifying neurons yield equal or better performance than hyperbolic tangent networks in spite ...



Journal

Journal title: IEEE Transactions on Knowledge and Data Engineering

Year: 2023

ISSN: 1558-2191, 1041-4347, 2326-3865

DOI: https://doi.org/10.1109/tkde.2022.3201170